
    Adaptation to motor-visual and motor-auditory temporal lags transfer across modalities

    Previous research has shown that the perceived timing of a sensorimotor event is recalibrated after brief exposure to delayed feedback of a voluntary action (Stetson et al. 2006). Here, we examined whether it is the sensory or the motor event that is shifted in time. We compared lag adaptation for action-feedback in visuo-motor and audio-motor pairs using an adaptation-test paradigm. Participants were exposed to a constant lag (50 or 150 ms) between their voluntary action (a finger tap) and its sensory feedback (a flash or a tone pip) during an adaptation period (~3 min). Immediately afterwards, they performed a temporal order judgment (TOJ) task on tap-feedback test stimulus pairings. The modality of the feedback stimulus was either the same as the adapted one (within-modal) or different (cross-modal). The results showed that the point of subjective simultaneity (PSS) was uniformly shifted in the direction of the exposed lag both within and across modalities (motor-visual, motor-auditory). This suggests that the temporal recalibration of sensorimotor events is mainly caused by a shift in the motor component.
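In a TOJ task, the PSS is the asynchrony at which the two temporal orders are reported equally often, i.e. where the psychometric function crosses 50%. A minimal sketch of how such a shift can be read off response proportions, using hypothetical data and simple linear interpolation (not the authors' actual fitting procedure):

```python
# Estimate the point of subjective simultaneity (PSS) from temporal order
# judgment (TOJ) data by interpolating where the psychometric function
# crosses 50%. All numbers below are hypothetical, for illustration only.

def estimate_pss(soas, p_feedback_second):
    """Linearly interpolate the 50% crossing of an increasing psychometric
    function. soas: test asynchronies in ms, sorted ascending."""
    points = list(zip(soas, p_feedback_second))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 <= 0.5 <= y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("psychometric function never crosses 50%")

# Hypothetical proportions of "feedback came second" responses
soas = [-150, -100, -50, 0, 50, 100, 150]   # tap-to-feedback SOA (ms)
baseline = [0.05, 0.10, 0.30, 0.50, 0.70, 0.90, 0.95]
adapted  = [0.02, 0.05, 0.15, 0.30, 0.50, 0.85, 0.95]

print(estimate_pss(soas, baseline))  # near 0 ms before adaptation
print(estimate_pss(soas, adapted))   # shifted toward the exposed lag
```

With these made-up proportions the estimated PSS moves from 0 ms to 50 ms after adaptation, which is the kind of uniform shift in the direction of the exposed lag that the abstract describes.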

    Phonetic recalibration of speech by text

    Listeners adjust their phonetic categories to cope with variations in the speech signal (phonetic recalibration). Previous studies have shown that lipread speech (and word knowledge) can adjust the perception of ambiguous speech and induce phonetic adjustments (Bertelson, Vroomen, & de Gelder in Psychological Science, 14(6), 592–597, 2003; Norris, McQueen, & Cutler in Cognitive Psychology, 47(2), 204–238, 2003). We examined whether orthographic information (text) can also induce phonetic recalibration. Experiment 1 showed that after exposure to ambiguous speech sounds halfway between /b/ and /d/ that were combined with text (b or d), participants were more likely to categorize auditory-only test sounds in accordance with the exposed letters. Experiment 2 replicated this effect with a very short exposure phase. These results show that listeners adjust their phonetic boundaries in accordance with disambiguating orthographic information and that these adjustments build up rapidly.

    No effect of synesthetic congruency on temporal ventriloquism

    A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). Recent studies have suggested that pitch–size synesthetic congruency (i.e., a natural association between the relative pitch of a sound and the relative size of a visual stimulus) might affect this phenomenon. To reexamine this, participants made temporal order judgements about small- and large-sized visual stimuli while high- or low-pitched tones were presented before the first and after the second light. We replicated a previous study showing that, at large sound–light intervals, sensitivity for visual temporal order was better for synesthetically congruent than for incongruent pairs. However, this congruency effect could not be attributed to temporal ventriloquism, since it disappeared at short sound–light intervals when compared with a synchronous audiovisual baseline condition that excluded response biases. In addition, synesthetic congruency did not affect temporal ventriloquism even when participants were made explicitly aware of congruency before testing. Our results thus challenge the view that synesthetic congruency affects temporal ventriloquism.

    Exposure to delayed visual feedback of the hand changes motor-sensory synchrony perception

    We examined whether the brain can adapt to temporal delays between a self-initiated action and the naturalistic visual feedback of that action. During an exposure phase, participants tapped with their index finger while seeing their own hand in real time (~0 ms delay) or delayed by 40, 80, or 120 ms. Following exposure, participants were tested with a simultaneity judgment (SJ) task in which they judged whether the video of their hand was synchronous or asynchronous with respect to their finger taps. The locations of the seen and the real hand were either different (Experiment 1) or aligned (Experiment 2). In both cases, the point of subjective simultaneity (PSS) was uniformly shifted in the direction of the exposure lags, while sensitivity to visual-motor asynchrony decreased with longer exposure delays. These findings demonstrate that the brain is quite flexible in adjusting the timing relation between a motor action and the otherwise naturalistic visual feedback that this action engenders.

    Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion

    Previous studies have shown that in tasks requiring participants to report the direction of apparent motion, task-irrelevant mono-beeps can "capture" visual motion perception when the beeps occur temporally close to the visual stimuli. However, the contributions of the relative timing of multimodal events and of the event structure, which modulates uni- and/or crossmodal perceptual grouping, remain unclear. To examine this question and extend the investigation to the tactile modality, the current experiments presented tactile two-tap apparent-motion streams, with an SOA of 400 ms between successive left-/right-hand middle-finger taps, accompanied by task-irrelevant, non-spatial auditory stimuli. The streams were presented for 90 seconds, and participants' task was to continuously report the perceived (left- or rightward) direction of tactile motion. In Experiment 1, each tactile stimulus was paired with an auditory beep, though odd-numbered taps were paired with an asynchronous beep, with audiotactile SOAs ranging from -75 ms to 75 ms. Perceived direction of tactile motion varied systematically with audiotactile SOA, indicative of a temporal-capture effect. In Experiment 2, two audiotactile SOAs were compared: one short (75 ms) and one long (325 ms). The long-SOA condition preserved the crossmodal event structure (so the temporal-capture dynamics should have been similar to those in Experiment 1), but both beeps now occurred temporally close to the taps on one side (even-numbered taps). The two SOAs were found to produce opposite modulations of apparent motion, indicative of an influence of crossmodal grouping. In Experiment 3, only odd-numbered, but not even-numbered, taps were paired with auditory beeps. This abolished the temporal-capture effect; instead, a dominant percept of apparent motion from the audiotactile side to the tactile-only side was observed independently of the SOA variation. These findings suggest that asymmetric crossmodal grouping leads to an attentional modulation of apparent motion, which inhibits crossmodal temporal-capture effects.

    No effect of auditory–visual spatial disparity on temporal recalibration

    It is known that the brain adaptively recalibrates itself to small (∼100 ms) auditory–visual (AV) temporal asynchronies so as to maintain intersensory temporal coherence. Here we explored whether spatial disparity between a sound and a light affects AV temporal recalibration. Participants were exposed to a train of asynchronous AV stimulus pairs (sound-first or light-first) with sounds and lights emanating from either the same or a different location. Following a short exposure phase, participants were tested on an AV temporal order judgement (TOJ) task. Temporal recalibration manifested itself as a shift of subjective simultaneity in the direction of the adapted audiovisual lag. The shift was equally large when exposure and test stimuli were presented from the same or different locations. These results provide strong evidence for the idea that spatial co-localisation is not a necessary constraint for intersensory pairing to occur.

    Audiovisual time perception is spatially specific

    Our sensory systems face a daily barrage of auditory and visual signals whose arrival times form a wide range of audiovisual asynchronies. These temporal relationships constitute an important metric for the nervous system when surmising which signals originate from common external events. Internal consistency is known to be aided by sensory adaptation: repeated exposure to a consistent asynchrony brings perceived arrival times closer to simultaneity. However, given the diverse nature of our audiovisual environment, functionally useful adaptation would need to be constrained to signals that were generated together. In the current study, we investigate the role of two potential constraining factors: spatial and contextual correspondence. By employing an experimental design that allows independent control of both factors, we show that observers are able to simultaneously adapt to two opposing temporal relationships, provided they are segregated in space. No such recalibration was observed when spatial segregation was replaced by contextual stimulus features (in this case, pitch and spatial frequency). These effects provide support for dedicated asynchrony mechanisms that interact with spatially selective mechanisms early in the visual and auditory sensory pathways.

    Auditory grouping occurs prior to intersensory pairing: evidence from temporal ventriloquism

    The authors examined how principles of auditory grouping relate to intersensory pairing. Two sounds that normally enhance sensitivity on a visual temporal order judgement task (i.e., temporal ventriloquism) were embedded in a sequence of flanker sounds that had either the same or a different frequency (Exp. 1), rhythm (Exp. 2), or location (Exp. 3). In all experiments, we found that temporal ventriloquism only occurred when the two capture sounds differed from the flankers, demonstrating that grouping of the sounds in the auditory stream took priority over intersensory pairing. By combining principles of auditory grouping with intersensory pairing, we demonstrate that capture sounds were, counter-intuitively, more effective when their locations differed from those of the lights than when they came from the same positions as the lights.

    Sound can improve visual search in developmental dyslexia

    We examined whether developmentally dyslexic adults suffer from sluggish attentional shifting (SAS; Hari and Renvall in Trends Cogn Sci 5:525–532, 2001) by measuring their shifting of attention in a visual search task with dynamic cluttered displays (Van der Burg et al. in J Exp Psychol Human 34:1053–1065, 2008). Dyslexics were generally slower than normal readers in searching for a horizontal or vertical target among oblique distracters. However, the addition of a click sound presented in synchrony with a color change of the target drastically improved their performance, up to the level of the normal readers. These results are in line with the idea that developmental dyslexics have specific problems in disengaging attention from the current fixation, and that the phasic alerting by a sound can compensate for this deficit.

    Depth cues and perceived audiovisual synchrony of biological motion

    Due to their different propagation times, visual and auditory signals from external events arrive at the human sensory receptors with a disparate delay. This delay varies consistently with distance, but, despite such variability, most events are perceived as synchronous. There are, however, contradictory data and claims regarding the existence of compensatory mechanisms for distance in simultaneity judgments. In this paper we used familiar audiovisual events, a visual walker and footstep sounds, and manipulated the number of depth cues. In a simultaneity judgment task we presented a large range of stimulus onset asynchronies corresponding to distances of up to 35 meters. We found an effect of distance on the simultaneity estimates, with greater distances requiring larger stimulus onset asynchronies, and vision always leading. This effect was stronger when both visual and auditory cues were present but, interestingly, was not found when depth cues were impoverished. These findings reveal that there should be an internal mechanism to compensate for audiovisual delays, which critically depends on the depth information available.
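The distance-dependent asynchrony in this study follows directly from propagation physics: light arrives effectively instantaneously, while sound travels at roughly 343 m/s (in air at about 20 °C, an assumed value here), so the physical audio lag grows by about 2.9 ms per metre. A quick sketch of the arithmetic:

```python
# Physical audiovisual asynchrony as a function of source distance:
# light arrives effectively instantly, whereas sound travels at ~343 m/s
# (dry air, ~20 °C), so an event at distance d lags acoustically by d/343 s.

SPEED_OF_SOUND = 343.0  # m/s, assumed value

def audio_lag_ms(distance_m: float) -> float:
    """Lag of the sound relative to the light, in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000.0

for d in (1, 10, 35):
    print(f"{d:3d} m -> sound lags by {audio_lag_ms(d):6.1f} ms")
```

At the 35 m maximum distance used in the study, the sound physically lags the light by roughly 102 ms, which is why perceived synchrony at larger distances requires larger stimulus onset asynchronies with vision leading.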